Massive Random Access of M2M Communications in LTE Networks


Machine-to-Machine (M2M) communications is one of the most important enabling technologies for the emerging Internet-of-Things paradigm, which has found wide application in domains such as the smart grid, intelligent transportation, and e-health. To facilitate M2M communications, the most natural and appealing solution is the Long Term Evolution (LTE) cellular system, owing to its ubiquitous coverage. With many Machine-Type Devices (MTDs) attempting to initiate connections with the Base Station (BS), however, the deluge of access requests causes severe congestion and low chances of success. Given the exponential growth in the number of MTDs, there is an urgent need to optimize the access efficiency of LTE networks to accommodate massive access from M2M communications.

Although significant efforts have been made to improve the access performance of MTDs, how to optimally tune the backoff parameters of each device to maximize the access efficiency has remained largely unknown. The challenge originates from the lack of proper modeling of the random access process of LTE networks: existing models either exclude the device-level input parameters by ignoring the queueing behavior of each MTD, or become unscalable in massive access scenarios, and neither can facilitate the optimization of the access performance of massive MTDs. As we pointed out in [Dai'12] and [Dai'13], a random-access network can be regarded as a multi-queue-single-server system, and the key to performance analysis lies in proper characterization of the service-time distribution, which is difficult to obtain if each node's queue is completely ignored, or if the interactions among nodes' queues are taken into full consideration. To reduce the modeling complexity, we demonstrated in [Dai'12] and [Dai'13] that a scalable node-centric model can be established by treating each node's queue as an independent queueing system with identically distributed service time. The model consists of two parts: 1) state characterization of each individual head-of-line (HOL) packet; and 2) characterization of the network steady-state points based on the fixed-point equations of the limiting probability of successful transmission of HOL packets. This modeling methodology has been successfully applied to WiFi networks and shown to be accurate. In most scenarios, the network steady-state points can be obtained as explicit functions of key system parameters, such as the number of nodes and the backoff window size/transmission probability of each node, based on which the optimal network performance can further be characterized.
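The fixed-point flavor of the steady-state characterization can be illustrated with a deliberately simplified example (our own toy equation, not the actual analysis of [Dai'12] or [Dai'13]): suppose n saturated Aloha-like nodes each transmit their HOL packet with probability q0*p in a slot, where p is the steady-state probability that a transmission succeeds and q0 is a hypothetical backoff parameter. The steady-state point then solves p = (1 - q0*p)^(n-1), which can be found numerically, e.g. by bisection:

```python
def steady_state_p(n, q0, tol=1e-12):
    """Solve p = (1 - q0*p)**(n-1) by bisection.

    Illustrative fixed-point equation for n saturated nodes, each
    transmitting its HOL packet with probability q0*p per slot;
    q0 is a hypothetical backoff parameter, not taken from the paper.
    """
    f = lambda p: p - (1.0 - q0 * p) ** (n - 1)
    lo, hi = 0.0, 1.0          # f(0) = -1 < 0 and f(1) >= 0 for q0 in (0, 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = steady_state_p(n=50, q0=0.02)
print(round(p, 4))
```

The point of the node-centric approach is that one scalar equation of this form characterizes the whole network, instead of a state space that grows with the number of nodes.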

The above model, however, cannot be directly applied to M2M communications in LTE networks. Specifically, a basic assumption of the aforementioned studies is that every single packet in a node's queue has to contend for channel access. In LTE networks, by contrast, a connection must first be established between a device and the BS before the device starts to transmit its data packets. That is, each device with data packets to transmit first sends a connection request to the BS; if the request is successfully received, the BS allocates resource blocks for the device to clear its data queue. The connection-based random access adopted in LTE networks thus differs from conventional packet-based random access in that data packets do not contend for the channel individually, which calls for new scalable node-centric models.
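The distinction can be made concrete with a toy Monte-Carlo sketch of connection-based access (the parameter values and the Bernoulli traffic model are illustrative assumptions, not the paper's setup): a backlogged device sends a request on one of M preambles; the request succeeds only if no other device picked the same preamble, after which the BS clears that device's entire data queue in one shot.

```python
import random

def simulate(n=200, M=54, q=0.25, arrival=0.01, slots=20000, seed=1):
    """Toy simulation of connection-based random access.

    n       : number of MTDs, each with a data queue
    M       : number of preambles
    q       : per-slot request-transmission probability (ACB-like)
    arrival : per-slot Bernoulli packet-arrival probability per MTD
    Returns the average number of successful connections per slot.
    """
    rng = random.Random(seed)
    queue = [0] * n                    # data-queue length of each MTD
    successes = 0
    for _ in range(slots):
        # new data arrivals
        for i in range(n):
            if rng.random() < arrival:
                queue[i] += 1
        # backlogged devices contend, each picking one preamble
        picks = {}
        for i in range(n):
            if queue[i] > 0 and rng.random() < q:
                picks.setdefault(rng.randrange(M), []).append(i)
        # a preamble chosen by exactly one device succeeds
        for devs in picks.values():
            if len(devs) == 1:
                queue[devs[0]] = 0     # BS clears the whole data queue
                successes += 1
    return successes / slots

print(simulate())
```

Note that the unit of contention is the request, not the packet: however long a device's data queue is, it costs exactly one successful request to drain it, which is precisely why the packet-based models no longer apply.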

In this paper, we propose a new analytical framework for optimizing the access efficiency of M2M communications in LTE networks. Specifically, to capture the key feature of the connection-based random access process, a novel double-queue model is established, where each MTD has one request queue and one data queue, and only the request queue is involved in the contention. By characterizing the state transition of each access request, the network steady-state points are obtained as the non-zero roots of a single fixed-point equation of the limiting probability of successful transmission of access requests. The complexity is independent of the number of MTDs even with the queueing behavior of each MTD taken into consideration, which is highly attractive in massive access scenarios.

To evaluate the access efficiency, the network throughput is further characterized and optimized by properly choosing the backoff parameters, including the Access Class Barring (ACB) factor and the Uniform Backoff (UB) window size. The analysis reveals that the maximum network throughput is solely determined by the number of preambles, and can be achieved by tuning either the ACB factor or the UB window size based on statistical information such as the traffic input rate of each MTD. Simulation results corroborate that with optimal tuning of the backoff parameters, the network throughput remains at the highest level regardless of the number of MTDs in the network, and is robust against feedback errors in the traffic input rate and burstiness of data arrivals.
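That the throughput ceiling depends only on the number of preambles can be sanity-checked with a standard back-of-the-envelope approximation (an illustration under our own simplifying assumptions, not the paper's derivation): if n backlogged devices each transmit with ACB factor q and pick one of M preambles uniformly at random, the expected number of successful requests per slot is n*q*(1 - q/M)^(n-1), which is maximized at q = M/n and approaches M/e as n grows, independent of n.

```python
import math

def throughput(n, M, q):
    """Expected successful requests per slot when each of n devices
    transmits with probability q and picks one of M preambles;
    a request succeeds iff no other device chose the same preamble."""
    return n * q * (1.0 - q / M) ** (n - 1)

M = 54                          # illustrative number of RACH preambles
for n in (100, 1000, 10000):
    q_opt = min(1.0, M / n)     # optimal ACB factor under this model
    print(n, round(throughput(n, M, q_opt), 2))
print(round(M / math.e, 2))     # asymptotic ceiling M/e
```

Under this model the optimal ACB factor scales as M/n, i.e. the barring must become more aggressive as the device population grows, while the achievable throughput stays pinned near M/e, consistent with the claim that the maximum is set by the number of preambles alone.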


[Dai'12] Lin Dai, "Stability and Delay Analysis of Buffered Aloha Networks," IEEE Trans. Wireless Commun., vol. 11, no. 8, pp. 2707-2719, Aug. 2012.

[Dai'13] Lin Dai, "Toward a Coherent Theory of CSMA and Aloha," IEEE Trans. Wireless Commun., vol. 12, no. 7, pp. 3428-3444, July 2013.

Wen Zhan and Lin Dai, "Massive Random Access of Machine-to-Machine Communications in LTE Networks: Modeling and Throughput Optimization," IEEE Trans. Wireless Commun., vol. 17, no. 4, pp. 2771-2785, Apr. 2018.